In [114]:
%autosave 10
Eurex Tutorial with Examples based on the VSTOXX Volatility Index
Dr. Yves J. Hilpisch
Continuum Analytics Europe GmbH
PyData London – 21 February 2014
You can find the presentation and the IPython Notebook here:
A brief bio:
See www.hilpisch.com.
Corporations, decision makers and analysts nowadays generally face a number of problems with data:
In addition to these data-oriented problems, there typically are organizational issues that have to be considered:
At Continuum Analytics, the vision for Python-based data analytics is the following:
“To revolutionize data analytics and visualization by moving high-level Python code and domain expertise closer to data. This vision rests on four pillars:
This tutorial focuses on
It does not address such important issues as
A fundamental Python stack for interactive data analytics and visualization should at least contain the following libraries and tools:
It is best to use either the Python distribution Anaconda or the Web-based analytics environment Wakari. Both provide almost "complete" Python environments.
For example, pandas can, among others, help with the following data-related problems:
As a simple example let's generate a NumPy array with five sets of 1000 (pseudo-)random numbers each.
In [1]:
import numpy as np # this imports the NumPy library
In [2]:
data = np.random.standard_normal((5, 1000)) # generate 5 sets with 1000 rn each
data[:, :5].round(3) # print first five values of each set rounded to 3 digits
Out[2]:
Let's plot a histogram of the 1st, 2nd and 3rd data set.
In [3]:
import matplotlib as mpl # this imports matplotlib
import matplotlib.pyplot as plt # this imports matplotlib.pyplot
%matplotlib inline
# inline plotting
In [4]:
plt.hist([data[0], data[1], data[2]], label=['Set 0', 'Set 1', 'Set 2'])
plt.grid(True) # grid for better readability
plt.legend()
Out[4]:
We then want to plot the 'running' cumulative sum of each set.
In [5]:
plt.figure() # initialize figure object
plt.grid(True)
for i, data_set in enumerate(data): # iterate over all rows
    plt.plot(data_set.cumsum(), label='Set %s' % i)
    # plot the running cumulative sum of each row
plt.legend(loc=0) # write legend with labels
Out[5]:
Some fundamental statistics from our data sets.
In [6]:
data.mean(axis=1) # average value of the 5 sets
Out[6]:
In [7]:
data.std(axis=1) # standard deviation of the 5 sets
Out[7]:
In [8]:
np.corrcoef(data).round(3) # correlation matrix of the 5 data sets
Out[8]:
We need to make a couple of imports for what is to come.
In [1]:
import pandas as pd
import pandas.io.data as pdd
from urllib import urlretrieve
The convenience function DataReader makes it easy to read historical stock price data from Yahoo! Finance (http://finance.yahoo.com).
In [2]:
index = pdd.DataReader('^GDAXI', data_source='yahoo', start='2007/3/30')
# ^GDAXI is the ticker symbol of the DAX index; the EURO STOXX 50 ticker symbol would e.g. be ^SX5E
In [5]:
index.head(n=5)
Out[5]:
In [3]:
index.info()
pandas' strength is the handling of indexed/labeled/structured data, like time series data.
In [6]:
index.tail()
Out[6]:
pandas makes it easy to implement vectorized operations, like calculating log-returns over whole time series.
In [7]:
index['Returns'] = np.log(index['Close'] / index['Close'].shift(1))
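As a brief aside on notation: the daily log return computed here is $r_t = \ln(P_t / P_{t-1})$, where $P_t$ is the closing level on day $t$. Log returns are additive over time, $\sum_t r_t = \ln(P_T / P_0)$, which is what makes cumulative sums of returns meaningful later on.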
In addition, pandas makes plotting quite simple and compact.
In [10]:
index[['Close', 'Returns']].plot(subplots=True, style='b', figsize=(8, 5))
Out[10]:
We now want to check how annual volatility changes over time.
In [11]:
index['Mov_Vol'] = pd.rolling_std(index['Returns'], window=252) * np.sqrt(252)
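The factor $\sqrt{252}$ annualizes the moving standard deviation of the daily log returns under the usual assumptions of roughly 252 trading days per year and (approximately) independent returns: $\sigma_{\text{ann}} = \sqrt{252} \cdot \sigma_{\text{daily}}$.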
Obviously, the annual volatility changes significantly over time.
In [12]:
index[['Close', 'Returns', 'Mov_Vol']].plot(subplots=True, style='b', figsize=(8, 5))
Out[12]:
Trend-based investment strategy with the DAX index data read above:
Signal generation:
In [21]:
index["42d"] = pd.rolling_mean(index["Close"], window=42)
index["252d"] = pd.rolling_mean(index["Close"], window=252)
index[["Close", "42d", "252d"]].plot(figsize=(8, 5))
Out[21]:
In [26]:
index["diff"] = index["42d"] - index["252d"]
index[["Close", "diff"]].plot(subplots=True, figsize=(8, 5))
Out[26]:
In [28]:
sigdiff = 100.0
In [30]:
index["Signal"] = np.where(index["diff"] > sigdiff, 1, 0)
index["Signal"] = np.where(index["diff"] < -sigdiff, -1, index["Signal"])
index[["Close", "diff", "Signal"]].plot(subplots=True, figsize=(8, 5))
Out[30]:
In [43]:
# log returns are additive over time, which is why the cumulative sum below gives the strategy's cumulative performance
index["Returns"] = np.log(index["Close"] / index["Close"].shift(1))
index["Strategy"] = (index["Signal"] * index["Returns"])
index["Earnings"] = index["Strategy"].cumsum()
index[["Close", "Signal", "Earnings"]].plot(subplots=True, figsize=(10, 8))
Out[43]:
It is a stylized fact that stock indexes and related volatility indexes are highly negatively correlated. The following example analyzes this stylized fact based on the EURO STOXX 50 stock index and the VSTOXX volatility index using Ordinary Least-Squares regression (OLS).
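In terms of a formula, the simple linear regression estimated below is $r_t^{VSTOXX} = \alpha + \beta \cdot r_t^{EUROSTOXX} + \varepsilon_t$, where $r_t$ denotes the respective daily log return; the stylized fact suggests a clearly negative slope $\beta$.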
First, we collect historical data for both the EURO STOXX 50 stock and the VSTOXX volatility index.
In [44]:
import pandas as pd
import datetime as dt
from urllib import urlretrieve
In [45]:
es_url = 'http://www.stoxx.com/download/historical_values/hbrbcpe.txt'
vs_url = 'http://www.stoxx.com/download/historical_values/h_vstoxx.txt'
urlretrieve(es_url, 'es.txt')
urlretrieve(vs_url, 'vs.txt')
Out[45]:
The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (I).
In [46]:
lines = open('es.txt').readlines() # reads the whole file line-by-line
In [47]:
lines[:5] # header not well formatted
Out[47]:
The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (II).
In [49]:
lines[3883:3890] # from 27.12.2001 an additional semi-colon appears
# note that the format changes halfway through the data set: the extra semi-colon
# at the end of each line would throw off pandas
Out[49]:
The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (III).
In [51]:
# We add an extra "DEL" column so that when we read it in we can delete it after,
# to deal with the additional semi-colon (additional column). Don't forget to
# delete it!
lines = open('es.txt').readlines() # reads the whole file line-by-line
new_file = open('es50.txt', 'w') # opens a new file
new_file.writelines('date' + lines[3][:-1].replace(' ', '') + ';DEL' + lines[3][-1])
# writes the corrected third line (additional column name)
# of the original file as first line of new file
new_file.writelines(lines[4:-1]) # writes the remaining lines of the original file
new_file.close() # close (and flush) the new file before it is read back in
The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (IV).
In [52]:
list(open('es50.txt'))[:5] # opens the new file for inspection
Out[52]:
Now, the data can be safely read into a DataFrame object.
In [53]:
es = pd.read_csv('es50.txt', index_col=0, parse_dates=True, sep=';', dayfirst=True)
In [54]:
del es['DEL'] # delete the helper column
In [55]:
es.info()
The VSTOXX data can be read without touching the raw data.
In [56]:
vs = pd.read_csv('vs.txt', index_col=0, header=2, parse_dates=True, sep=',', dayfirst=True)
# you can alternatively read from the Web source directly
# without saving the csv file to disk:
# vs = pd.read_csv(vs_url, index_col=0, header=2,
# parse_dates=True, sep=',', dayfirst=True)
We now merge the data for further analysis.
In [58]:
# drop the EURO STOXX 50 data from before the VSTOXX series starts,
# i.e. all data before 2000-01-01
import datetime as dt
data = pd.DataFrame({'EUROSTOXX' :
es['SX5E'][es.index > dt.datetime(1999, 12, 31)]})
data = data.join(pd.DataFrame({'VSTOXX' :
vs['V2TX'][vs.index > dt.datetime(1999, 12, 31)]}))
data.info()
Let's inspect the two time series.
In [59]:
data.head()
Out[59]:
A picture can tell almost the complete story.
In [61]:
# confirms the stylized fact: when the index falls, volatility spikes
data.plot(subplots=True, grid=True, style='b', figsize=(10, 5))
Out[61]:
We now generate log returns for both time series.
In [63]:
# log returns make two differently scaled time series directly comparable;
# this is a common pattern in financial time series analysis
rets = np.log(data / data.shift(1))
rets.head()
Out[63]:
To this new data set, also stored in a DataFrame object, we apply OLS.
In [64]:
xdat = rets['EUROSTOXX']
ydat = rets['VSTOXX']
model = pd.ols(y=ydat, x=xdat)
model
Out[64]:
Again, we want to see how our results look graphically.
In [66]:
# Again, confirms stylized theory. Highly negative correlation.
plt.plot(xdat, ydat, 'r.')
ax = plt.axis() # grab axis values
x = np.linspace(ax[0], ax[1] + 0.01)
plt.plot(x, model.beta[1] + model.beta[0] * x, 'b', lw=2)
plt.grid(True)
plt.axis('tight')
Out[66]:
Let us see if we can identify systematics over time. And indeed, during the 2007/2008 crisis (yellow dots) volatility was more pronounced than more recently (red dots).
In [67]:
mpl_dates = mpl.dates.date2num(rets.index)
plt.figure(figsize=(8, 4))
plt.scatter(rets['EUROSTOXX'], rets['VSTOXX'], c=mpl_dates, marker='o')
plt.grid(True)
plt.xlabel('EUROSTOXX')
plt.ylabel('VSTOXX')
plt.colorbar(ticks=mpl.dates.DayLocator(interval=250),
format=mpl.dates.DateFormatter('%d %b %y'))
Out[67]:
We want to test whether the EURO STOXX 50 and/or the VSTOXX returns are normally distributed (e.g. whether they exhibit fat tails). We do this both graphically (Q-Q plots) and with formal statistical tests.
Add on: plot a histogram of the log return frequencies and compare it to a normal distribution with the same mean and variance (using e.g. norm.pdf from scipy.stats); a sketch follows the histogram below.
In [70]:
import statsmodels.api as sma
import scipy.stats
rets.head()
Out[70]:
In [124]:
r1 = rets["EUROSTOXX"]
print r1.head()
r1.values
Out[124]:
In [95]:
rets = rets.dropna()
In [96]:
# This is a benchmark; normally distributed data looks
# like this.
sma.qqplot(np.random.standard_normal(1000), line='s')
pass
In [97]:
# the Q-Q plot of the EUROSTOXX returns shows the classic fat-tail pattern
sma.qqplot(rets["EUROSTOXX"].values, line='s')
pass
In [98]:
# the Q-Q plot of the VSTOXX returns also shows fat tails
sma.qqplot(rets["VSTOXX"].values, line='s')
pass
The p-values are well below any common critical level, so we reject the null hypothesis that either return distribution is normal.
In [101]:
scipy.stats.normaltest(rets["EUROSTOXX"].values)
Out[101]:
In [102]:
scipy.stats.normaltest(rets["VSTOXX"].values)
Out[102]:
In [115]:
scipy.stats.shapiro(rets["VSTOXX"].values)
Out[115]:
In [110]:
def normality_tests(array):
    print "Skew:        %s" % (scipy.stats.skew(array), )
    print "Skew test:   %s" % (scipy.stats.skewtest(array), )
    print "Kurt:        %s" % (scipy.stats.kurtosis(array), )
    print "Kurt test:   %s" % (scipy.stats.kurtosistest(array), )
    print "Normal test: %s" % (scipy.stats.normaltest(array), )
In [111]:
normality_tests(np.random.standard_normal(10000))
In [112]:
normality_tests(rets["VSTOXX"].values)
In [88]:
rets.hist(bins=20, figsize=(10, 5))
Out[88]:
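The following is a minimal sketch for the add-on exercise stated above (it is not part of the original notebook): the histogram of one return series is overlaid with a normal density of the same mean and standard deviation via scipy.stats.norm.pdf. The choice of the 'EUROSTOXX' column and of 50 bins is purely illustrative.
In [ ]:
r = rets['EUROSTOXX'].values  # log returns as a NumPy array
plt.figure(figsize=(8, 4))
plt.hist(r, bins=50, normed=True, label='log returns')
# normed=True scales the histogram to a density
x = np.linspace(r.min(), r.max(), 200)
plt.plot(x, scipy.stats.norm.pdf(x, loc=r.mean(), scale=r.std()),
         'r', lw=2, label='normal pdf (same mean/std)')
plt.grid(True)
plt.legend()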
A number of studies have illustrated that constant proportion investments in volatility derivatives – given a diversified equity portfolio – might improve investment performance considerably. See, for instance, the study
The Benefits of Volatility Derivatives in Equity Portfolio Management
We now want to replicate (in a simplified fashion) what you can flexibly test here on the basis of two backtesting applications for VSTOXX-based investment strategies:
The strategy we are going to implement and test is characterized as follows:
We already have the necessary data available. However, we want to drop NaN values and normalize the index values.
In [125]:
data = data.dropna()
In [127]:
# normalize both series to a starting value of 100 so we compare like with like
data = data / data.ix[0] * 100
In [128]:
data.head()
Out[128]:
First, the initial investment.
In [129]:
invest = 100
cratio = 0.3
data['Equity'] = (1 - cratio) * invest / data['EUROSTOXX'][0]
data['Volatility'] = cratio * invest / data['VSTOXX'][0]
This can already be considered a static investment strategy.
In [130]:
data['Static'] = (data['Equity'] * data['EUROSTOXX']
+ data['Volatility'] * data['VSTOXX'])
In [133]:
# not an impressive performance yet, but it shows how to get started
data[['EUROSTOXX', 'Static']].plot(figsize=(10, 5))
Out[133]:
Second, the dynamic strategy with daily adjustments to keep the value ratio constant.
In [134]:
for i in xrange(1, len(data)):
    evalue = data['Equity'][i - 1] * data['EUROSTOXX'][i]
      # value of equity position
    vvalue = data['Volatility'][i - 1] * data['VSTOXX'][i]
      # value of volatility position
    tvalue = evalue + vvalue
      # total wealth
    data['Equity'][i] = (1 - cratio) * tvalue / data['EUROSTOXX'][i]
      # re-allocation of total wealth to equity ...
    data['Volatility'][i] = cratio * tvalue / data['VSTOXX'][i]
      # ... and volatility position
Third, the total wealth position.
In [135]:
data['Dynamic'] = (data['Equity'] * data['EUROSTOXX']
+ data['Volatility'] * data['VSTOXX'])
In [136]:
data.head()
Out[136]:
A brief check whether the ratios are indeed constant.
In [137]:
(data['Volatility'] * data['VSTOXX'] / data['Dynamic'])[:5]
Out[137]:
In [138]:
(data['Equity'] * data['EUROSTOXX'] / data['Dynamic'])[:5]
Out[138]:
Let us inspect the performance of the strategy.
In [139]:
data[['EUROSTOXX', 'Dynamic']].plot(figsize=(10, 5))
Out[139]:
Write a Python function which allows for an arbitrary but constant ratio to be invested in the VSTOXX index and which returns net performance values (in percent) for the constant proportion VSTOXX strategy.
Add on: find the ratio to be invested in the VSTOXX that gives the maximum performance (a sketch follows the function below).
In [143]:
np.linspace(0, 1, num=20)
Out[143]:
In [157]:
import scipy.optimize
def my_investment(cratio):
    invest = 100
    data['Equity'] = (1 - cratio) * invest / data['EUROSTOXX'][0]
    data['Volatility'] = cratio * invest / data['VSTOXX'][0]
    for i in xrange(1, len(data)):
        evalue = data['Equity'][i - 1] * data['EUROSTOXX'][i]
          # value of equity position
        vvalue = data['Volatility'][i - 1] * data['VSTOXX'][i]
          # value of volatility position
        tvalue = evalue + vvalue
          # total wealth
        data['Equity'][i] = (1 - cratio) * tvalue / data['EUROSTOXX'][i]
          # re-allocation of total wealth to equity ...
        data['Volatility'][i] = cratio * tvalue / data['VSTOXX'][i]
          # ... and volatility position
    data['Dynamic'] = (data['Equity'] * data['EUROSTOXX']
                       + data['Volatility'] * data['VSTOXX'])
    return -data["Dynamic"][-1]
# reference: http://scipy-lectures.github.io/advanced/mathematical_optimization/
#scipy.optimize.brent(my_investment) # -512.953971939 for 0.488
print my_investment(0.488)
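A minimal sketch for the add-on (again not from the original notebook): since my_investment returns the negative terminal value of the dynamic strategy, a bounded scalar minimizer such as scipy.optimize.fminbound over the ratio interval [0, 1] maximizes the performance; the call below is illustrative.
In [ ]:
opt_ratio = scipy.optimize.fminbound(my_investment, 0.0, 1.0)
# bounded search for the VSTOXX ratio in [0, 1]; minimizing the negative
# terminal value is equivalent to maximizing the strategy's performance
print opt_ratio, -my_investment(opt_ratio)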
Using standard Python functionality and pandas, the code that follows reads intraday, high-frequency data from a Web source, plots it and resamples it.
In [158]:
url = 'http://hopey.netfonds.no/posdump.php?'
url += 'date=%s%s%s&paper=AAPL.O&csv_format=csv' % ('2014', '02', '19')
# you may have to adjust the date since only recent dates are available
urlretrieve(url, 'aapl.csv')
Out[158]:
In [159]:
AAPL = pd.read_csv('aapl.csv', index_col=0, header=0, parse_dates=True)
In [160]:
AAPL.info()
The intraday evolution of the Apple stock price.
In [161]:
AAPL['bid'].plot()
Out[161]:
In [162]:
AAPL = AAPL[AAPL.index > dt.datetime(2014, 2, 19, 10, 0, 0)]
# only data later than 10am at that day
A resampling of the data is easily accomplished with pandas.
In [163]:
# this resamples the record frequency to 5 minutes, using the mean as aggregation rule;
# fillna(method='ffill') forward-fills, i.e. re-uses the last valid value
AAPL_5min = AAPL.resample(rule='5min', how='mean').fillna(method='ffill')
AAPL_5min.head()
Out[163]:
Let's have a graphical look at the new data set.
In [164]:
AAPL_5min['bid'].plot()
Out[164]:
With pandas you can easily apply custom functions to time series data.
In [165]:
AAPL_5min['bid'].apply(lambda x: 2 * 540 - x).plot()
# this mirrors the stock price development at the level of 540
Out[165]:
10 years ago, Python was considered exotic in the analytics space – at best. Languages/packages like R and Matlab dominated the scene. Today, Python has become a major force in financial analytics & visualization due to a number of characteristics:
One of the easiest ways to deploy Python today across a whole organization with a heterogeneous IT infrastructure is via Wakari, Continuum's Web-/Browser- and Python-based Data Analytics environment. It is available both as a cloud solution and as an enterprise solution for in-house deployment.
Continuum Analytics Inc. – the company Web site
Dr. Yves J. Hilpisch – my personal Web site
Derivatives Analytics with Python – my new book
Read an Excerpt and Order the Book
Contact Us